124 research outputs found
New Guarantees for Blind Compressed Sensing
Blind Compressed Sensing (BCS) is an extension of Compressed Sensing (CS)
where the optimal sparsifying dictionary is assumed to be unknown and subject
to estimation (in addition to the CS sparse coefficients). Since the emergence
of BCS, dictionary learning, a.k.a. sparse coding, has been studied as a matrix
factorization problem whose sample complexity, uniqueness, and
identifiability have been addressed thoroughly. However, in spite of the strong
connections between BCS and sparse coding, recent results from the sparse
coding problem area have not been exploited within the context of BCS. In
particular, prior BCS efforts have focused on learning constrained and complete
dictionaries that limit the scope and utility of these efforts. In this paper,
we develop new theoretical bounds for perfect recovery for the general
unconstrained BCS problem. These unconstrained BCS bounds cover the case of
overcomplete dictionaries, and hence, they go well beyond the existing BCS
theory. Our perfect recovery results integrate the combinatorial theories of
sparse coding with some of the recent results from low-rank matrix recovery. In
particular, we propose an efficient CS measurement scheme that results in
practical recovery bounds for BCS. Moreover, we discuss the performance of BCS
under polynomial-time sparse coding algorithms.
Comment: To appear in the 53rd Annual Allerton Conference on Communication,
Control and Computing, University of Illinois at Urbana-Champaign, IL, USA,
201
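The coupling between the CS measurement operator and the unknown sparsifying dictionary can be made concrete with a small sketch. The dimensions, Gaussian measurement matrix, and use of orthogonal matching pursuit below are illustrative assumptions, not the measurement scheme proposed in the paper; the true dictionary is used here only to form the compound operator, whereas BCS must estimate it jointly with the coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, chosen only for illustration.
n, m, k, s, L = 20, 30, 15, 2, 40  # signal dim, atoms, measurements, sparsity, signals

D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)                 # overcomplete dictionary, unit-norm atoms

# Each signal y = D x with an s-sparse coefficient vector x.
X_true = np.zeros((m, L))
for j in range(L):
    X_true[rng.choice(m, size=s, replace=False), j] = rng.standard_normal(s)
Y = D @ X_true                                 # unobserved signals

A = rng.standard_normal((k, n)) / np.sqrt(k)   # CS measurement matrix
B = A @ Y                                      # all that BCS observes

# Sparse coding against the *compound* operator A @ D via orthogonal matching
# pursuit; in BCS, D itself is unknown and must be estimated jointly with X.
AD = A @ D
X_hat = np.zeros((m, L))
for j in range(L):
    r, supp = B[:, j].copy(), []
    for _ in range(s):
        supp.append(int(np.argmax(np.abs(AD.T @ r))))
        coef, *_ = np.linalg.lstsq(AD[:, supp], B[:, j], rcond=None)
        r = B[:, j] - AD[:, supp] @ coef
    X_hat[supp, j] = coef
```

Each recovered column is s-sparse by construction; when the support is identified correctly, the residual B - A @ D @ X_hat is essentially zero for that column.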
RPCA-KFE: Key Frame Extraction for Consumer Video based on Robust Principal Component Analysis
Key frame extraction algorithms consider the problem of selecting a subset of
the most informative frames from a video to summarize its content.
Comment: This paper has been withdrawn by the author due to a crucial sign
error in equation
Strong-Weak Integrated Semi-supervision for Unsupervised Single and Multi Target Domain Adaptation
Unsupervised domain adaptation (UDA) focuses on transferring knowledge
learned in the labeled source domain to the unlabeled target domain. Despite
significant progress that has been achieved in single-target domain adaptation
for image classification in recent years, the extension from single-target to
multi-target domain adaptation is still a largely unexplored problem area. In
general, unsupervised domain adaptation faces a major challenge when attempting
to learn reliable information from a single unlabeled target domain. Increasing
the number of unlabeled target domains exacerbates the problem significantly.
In this paper, we propose a novel strong-weak integrated
semi-supervision (SWISS) learning strategy for image classification using
unsupervised domain adaptation that works well for both single-target and
multi-target scenarios. Under the proposed SWISS-UDA framework, a strong
representative set of high-confidence but low-diversity target-domain samples
and a weak representative set of low-confidence but high-diversity target-domain
samples are updated continuously during training. Both sets
are fused to generate an augmented strong-weak training batch with
pseudo-labels to train the network during every iteration. The extension from
single-target to multi-target domain adaptation is accomplished by exploring
the class-wise distance relationship between domains and replacing the strong
representative set with much stronger samples from peer domains via peer
scaffolding. Moreover, a novel adversarial logit loss is proposed to reduce the
intra-class divergence between source and target domains, which is
back-propagated adversarially with a gradient reverse layer between the
classifier and the rest of the network. Experimental results based on three
benchmarks, Office-31, Office-Home, and DomainNet, show the effectiveness of
the proposed SWISS framework.
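The gradient reverse layer mentioned in the abstract is a standard construction: identity in the forward pass, negated and scaled gradient in the backward pass. The minimal class below is an illustrative sketch, not the paper's implementation, and the `lam` coefficient name is hypothetical:

```python
class GradReverse:
    """Gradient reversal layer: identity forward, gradient scaled by -lam backward.

    Placed between the feature extractor and an adversarial classifier, it makes
    the classifier's training signal push the features toward domain confusion,
    which is how an adversarial loss can be back-propagated adversarially.
    """

    def __init__(self, lam=1.0):
        self.lam = lam          # trade-off coefficient (hypothetical name)

    def forward(self, x):
        return x                # features pass through unchanged

    def backward(self, grad):
        # Reverse and scale the gradient flowing back to the feature extractor.
        return [-self.lam * g for g in grad]
```

With `lam = 1.0`, a gradient of `[0.5, -0.25]` arriving from the classifier reaches the feature extractor as `[-0.5, 0.25]`.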
Kovalenko's Full-Rank Limit and Overhead as Lower Bounds for Error-Performances of LDPC and LT Codes over Binary Erasure Channels
We present Kovalenko's full-rank limit as a tight lower bound for the decoding
error probability of LDPC codes and LT codes over the BEC. From the limit, we
derive a full-rank overhead as a lower bound on the stable overhead for
successful maximum-likelihood decoding of these codes.
Comment: A short version of this paper was presented at ISITA 2008, Auckland,
NZ. The first draft was submitted to IEEE Transactions on Information Theory,
2008/0
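The full-rank bound can be sketched numerically. For a uniform random k x n binary matrix over GF(2) with overhead delta = n - k, the probability of full rank is the product of (1 - 2^-j) for j from delta+1 to n, and Kovalenko's limit is the corresponding infinite product as k grows with delta fixed. Since ML decoding over the BEC succeeds only if the relevant submatrix has full rank, one minus this quantity lower-bounds the decoding error probability. Function names below are illustrative, and the uniform-matrix model is an idealization of actual LDPC/LT generator matrices:

```python
from math import prod

def full_rank_prob(k, n):
    """P that a uniform random k x n matrix over GF(2) has rank k (requires n >= k).

    Equals the product of (1 - 2**-j) for j = n-k+1, ..., n.
    """
    return prod(1.0 - 2.0 ** (i - n) for i in range(k))

def kovalenko_full_rank_limit(delta, terms=60):
    """Limit of full_rank_prob(k, k + delta) as k -> infinity, overhead delta fixed."""
    return prod(1.0 - 2.0 ** (-j) for j in range(delta + 1, delta + 1 + terms))

def ml_error_lower_bound(k, n):
    """ML decoding over the BEC succeeds only if the received-symbol submatrix has
    full rank, so the failure probability is at least 1 - P(full rank)."""
    return 1.0 - full_rank_prob(k, n)
```

With zero overhead (n = k) the limit is about 0.2888, so ML decoding fails with probability at least roughly 0.71 when exactly k symbols are received; even a modest overhead drives the full-rank probability close to 1.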